288 research outputs found

    Blind source separation using temporal predictability

    A measure of temporal predictability is defined and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures of those signals by finding an un-mixing matrix that maximizes a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalized eigenvalue problem; such problems have scaling characteristics of O(N³), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-Gaussian, sub-Gaussian, and Gaussian probability density functions, and on mixtures of voices and music
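
    The separation step can be sketched directly from this description. The sketch below is illustrative rather than the paper's implementation: it assumes temporal predictability is measured with two exponential moving averages (a long-term and a short-term predictor with hypothetical smoothing parameters h_long and h_short), so that maximizing the ratio of their prediction-error variances becomes the generalized eigenvalue problem mentioned in the abstract.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import lfilter

def predictability_unmixing(X, h_long=0.99, h_short=0.5):
    """Recover an un-mixing matrix by maximizing temporal predictability.

    X        : array of shape (n_mixtures, n_samples), the signal mixtures.
    h_long   : smoothing coefficient of the long-term predictor (assumed).
    h_short  : smoothing coefficient of the short-term predictor (assumed).
    Returns W; the recovered signals are the rows of W @ X.
    """
    # Exponential moving averages serve as long- and short-term predictions.
    X_long = lfilter([1.0 - h_long], [1.0, -h_long], X, axis=1)
    X_short = lfilter([1.0 - h_short], [1.0, -h_short], X, axis=1)

    # Covariance matrices of the corresponding prediction errors.
    E_long, E_short = X - X_long, X - X_short
    C_long = E_long @ E_long.T
    C_short = E_short @ E_short.T

    # The predictability of y = w @ X is the ratio (w C_long w') / (w C_short w');
    # its maximizers solve the generalized eigenvalue problem
    # C_long w = lambda C_short w, which yields every un-mixing vector at once.
    _, W = eigh(C_long, C_short)
    return W.T

# Usage: sources = predictability_unmixing(mixtures) @ mixtures
```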

    Decorrelation control by the cerebellum achieves oculomotor plant compensation in simulated vestibulo-ocular reflex

    We introduce decorrelation control as a candidate algorithm for the cerebellar microcircuit and demonstrate its utility for oculomotor plant compensation in a linear model of the vestibulo-ocular reflex (VOR). Using an adaptive-filter representation of cerebellar cortex and an anti-Hebbian learning rule, the algorithm learnt to compensate for the oculomotor plant by minimizing correlations between a predictor variable (eye-movement command) and a target variable (retinal slip), without requiring a motor-error signal. Because it also provides an estimate of the unpredicted component of the target variable, decorrelation control can simplify both motor coordination and sensory acquisition. It thus unifies motor and sensory cerebellar functions
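
    A minimal sketch of the core update, assuming a standard adaptive-filter form in which the eye-movement command is expanded into basis signals and the anti-Hebbian rule moves each weight against its correlation with retinal slip; the names and the learning rate beta are illustrative, not taken from the paper.

```python
import numpy as np

def decorrelation_step(w, basis, retinal_slip, beta=1e-3):
    """One anti-Hebbian update of the adaptive-filter weights.

    w            : current filter weights (model of cerebellar cortex).
    basis        : filtered copies of the predictor variable (the
                   eye-movement command), one value per weight.
    retinal_slip : current value of the target variable.
    beta         : learning rate (illustrative value).
    """
    # Anti-Hebbian rule: each weight moves against the product of its input
    # and the target signal, driving their correlation towards zero.
    w = w - beta * retinal_slip * basis
    # The filter output is the plant-compensating term added to the motor
    # command; after learning, the remaining slip is the unpredicted part.
    compensation = w @ basis
    return w, compensation

# Example: w, _ = decorrelation_step(np.zeros(10), np.random.randn(10), 0.2)
```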

    Principles of Neural Information Theory A Tutorial Introduction

    The human brain is the most complex computational machine known to science, even though its components (neurons) are slow and unreliable compared to a laptop computer. In this richly illustrated book, Shannon's mathematical theory of information is used to explore the computational efficiency of neurons, with special reference to visual perception. A diverse range of examples is used to show how information theory effectively defines fundamental and unbreachable limits on neural efficiency; limits which ultimately determine the neuroanatomical microstructure of the eye and brain. Written in an informal style, with a comprehensive glossary and tutorial appendices, this book is ideal for novices who wish to understand the essential principles of neural information theory

    Information Theory: A Tutorial Introduction

    Shannon's mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon's theory. An annotated reading list is provided for further reading
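
    As a concrete illustration of such a limit (a standard textbook example, not taken from the paper), the capacity of a binary symmetric channel with bit-error probability p is C = 1 − H(p) bits per channel use:

```python
import numpy as np

def binary_entropy(p):
    """Entropy H(p), in bits, of a binary variable with P(1) = p."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity (bits per use) of a binary symmetric channel
    that flips each transmitted bit with probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.1))  # about 0.531 bits per channel use
```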

    When is now? Perception of simultaneity

    We address the following question: Is there a difference (D) between the amount of time for auditory and visual stimuli to be perceived? On each of 1000 trials, observers were presented with a light-sound pair, separated by a stimulus onset asynchrony (SOA) between -250 ms (sound first) and 250 ms. Observers indicated if the light-sound pair came on simultaneously by pressing one of two (yes or no) keys. The SOA most likely to yield affirmative responses was defined as the point of subjective simultaneity (PSS). PSS values were between -21 ms (i.e. sound 21 ms before light) and 150 ms. Evidence is presented that each PSS is observer specific. In a second experiment, each observer was tested using two observer-stimulus distances. The resultant PSS values are highly correlated (r = 0.954, p = 0.003), suggesting that each observer's PSS is stable. PSS values were significantly affected by observer-stimulus distance, suggesting that observers do not take into account the effect of changes in distance on the resultant difference in arrival times of light and sound. The difference RTd in simple reaction time to single visual and auditory stimuli was also estimated; no evidence that RTd is observer specific or stable was found. The implications of these findings for the perception of multisensory stimuli are discussed
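
    The PSS estimate described here can be sketched as a curve fit: the proportion of "simultaneous" responses is modelled as a peaked function of SOA, and the PSS is taken as the location of that peak. The Gaussian shape and the starting values below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def simultaneity_curve(soa, amplitude, pss, width):
    """Proportion of 'yes' (simultaneous) responses as a function of SOA (ms)."""
    return amplitude * np.exp(-0.5 * ((soa - pss) / width) ** 2)

def estimate_pss(soas, p_yes):
    """Fit the curve to observed proportions; the fitted peak is the PSS."""
    params, _ = curve_fit(simultaneity_curve, soas, p_yes,
                          p0=[1.0, 0.0, 100.0])  # assumed starting values
    return params[1]  # PSS in ms (negative means sound must lead)
```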

    Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing

    The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers

    Large-Eddy Simulations of Magnetohydrodynamic Turbulence in Heliophysics and Astrophysics

    We live in an age in which high-performance computing is transforming the way we do science. Previously intractable problems are now becoming accessible by means of increasingly realistic numerical simulations. One of the most enduring and most challenging of these problems is turbulence. Yet, despite these advances, the extreme parameter regimes encountered in space physics and astrophysics (as in atmospheric and oceanic physics) still preclude direct numerical simulation. Numerical models must take a Large Eddy Simulation (LES) approach, explicitly computing only a fraction of the active dynamical scales. The success of such an approach hinges on how well the model can represent the subgrid-scales (SGS) that are not explicitly resolved. In addition to the parameter regime, heliophysical and astrophysical applications must also face an equally daunting challenge: magnetism. The presence of magnetic fields in a turbulent, electrically conducting fluid flow can dramatically alter the coupling between large and small scales, with potentially profound implications for LES/SGS modeling. In this review article, we summarize the state of the art in LES modeling of turbulent magnetohydrodynamic (MHD) flows. After discussing the nature of MHD turbulence and the small-scale processes that give rise to energy dissipation, plasma heating, and magnetic reconnection, we consider how these processes may best be captured within an LES/SGS framework. We then consider several special applications in heliophysics and astrophysics, assessing triumphs, challenges, and future directions
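
    For orientation, the filtering step at the heart of any LES/SGS approach can be written down explicitly; the incompressible MHD form below (magnetic field in Alfvén-speed units, P the total pressure) is a standard illustration rather than a model advocated by the review. Overbars denote the resolved, filtered fields, and the τ terms are the unresolved contributions a closure must supply.

```latex
\begin{align}
  \partial_t \bar{u}_i + \partial_j\!\left(\bar{u}_i\bar{u}_j - \bar{b}_i\bar{b}_j\right)
    &= -\partial_i \bar{P} + \nu\,\nabla^2 \bar{u}_i - \partial_j \tau^{u}_{ij},\\
  \partial_t \bar{b}_i + \partial_j\!\left(\bar{u}_j\bar{b}_i - \bar{b}_j\bar{u}_i\right)
    &= \eta\,\nabla^2 \bar{b}_i - \partial_j \tau^{b}_{ij},
\end{align}
% subgrid-scale stress and electromotive terms introduced by filtering:
\begin{align}
  \tau^{u}_{ij} &= \left(\overline{u_i u_j} - \bar{u}_i\bar{u}_j\right)
                 - \left(\overline{b_i b_j} - \bar{b}_i\bar{b}_j\right),\\
  \tau^{b}_{ij} &= \left(\overline{u_j b_i} - \bar{u}_j\bar{b}_i\right)
                 - \left(\overline{b_j u_i} - \bar{b}_j\bar{u}_i\right).
\end{align}
```

    A closure such as a Smagorinsky-type eddy viscosity and resistivity then approximates these terms from the resolved fields; how well such closures cope with the large-scale/small-scale coupling induced by the magnetic field is the question the review addresses.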

    State of the climate in 2013

    In 2013, the vast majority of the monitored climate variables reported here maintained trends established in recent decades. ENSO was in a neutral state during the entire year, remaining mostly on the cool side of neutral with modest impacts on regional weather patterns around the world. This follows several years dominated by the effects of either La Niña or El Niño events. According to several independent analyses, 2013 was again among the 10 warmest years on record at the global scale, both at the Earth's surface and through the troposphere. Some regions in the Southern Hemisphere had record or near-record high temperatures for the year. Australia observed its hottest year on record, while Argentina and New Zealand reported their second and third hottest years, respectively. In Antarctica, Amundsen-Scott South Pole Station reported its highest annual temperature since records began in 1957. At the opposite pole, the Arctic observed its seventh warmest year since records began in the early 20th century. At 20-m depth, record high temperatures were measured at some permafrost stations on the North Slope of Alaska and in the Brooks Range. In the Northern Hemisphere extratropics, anomalous meridional atmospheric circulation occurred throughout much of the year, leading to marked regional extremes of both temperature and precipitation. Cold temperature anomalies during winter across Eurasia were followed by warm spring temperature anomalies, which were linked to a new record low Eurasian snow cover extent in May. Minimum sea ice extent in the Arctic was the sixth lowest since satellite observations began in 1979. Including 2013, all seven lowest extents on record have occurred in the past seven years. Antarctica, on the other hand, had above-average sea ice extent throughout 2013, with 116 days of new daily high extent records, including a new daily maximum sea ice area of 19.57 million km² reached on 1 October. ENSO-neutral conditions in the eastern central Pacific Ocean and a negative Pacific decadal oscillation pattern in the North Pacific had the largest impacts on the global sea surface temperature in 2013. The North Pacific reached a historic high temperature in 2013 and, on balance, the globally averaged sea surface temperature was among the 10 highest on record. Overall, the salt content in near-surface ocean waters increased while in intermediate waters it decreased. Global mean sea level continued to rise during 2013, on pace with a trend of 3.2 mm yr⁻¹ over the past two decades. A portion of this trend (0.5 mm yr⁻¹) has been attributed to natural variability associated with the Pacific decadal oscillation as well as to ongoing contributions from the melting of glaciers and ice sheets and ocean warming. Global tropical cyclone frequency during 2013 was slightly above average with a total of 94 storms, although the North Atlantic Basin had its quietest hurricane season since 1994. In the Western North Pacific Basin, Super Typhoon Haiyan, the deadliest tropical cyclone of 2013, had 1-minute sustained winds estimated to be 170 kt (87.5 m s⁻¹) on 7 November, the highest wind speed ever assigned to a tropical cyclone. High storm surge was also associated with Haiyan as it made landfall over the central Philippines, an area where sea level is currently at historic highs, increasing by 200 mm since 1970. In the atmosphere, carbon dioxide, methane, and nitrous oxide all continued to increase in 2013. As in previous years, each of these major greenhouse gases once again reached historic high concentrations. In the Arctic, carbon dioxide and methane increased at the same rate as the global increase. These increases are likely due to export from lower latitudes rather than a consequence of increases in Arctic sources, such as thawing permafrost. At Mauna Loa, Hawaii, for the first time since measurements began in 1958, the daily average mixing ratio of carbon dioxide exceeded 400 ppm on 9 May. The state of these variables, along with dozens of others, and the 2013 climate conditions of regions around the world are discussed in further detail in this 24th edition of the State of the Climate series. © 2014, American Meteorological Society. All rights reserved

    Measurement of CP observables in B± → D(⁎)K± and B± → D(⁎)π± decays

    Measurements of CP observables in B± → D(⁎)K± and B± → D(⁎)π± decays are presented, where D(⁎) indicates a neutral D or D⁎ meson that is an admixture of D(⁎)0 and D̄(⁎)0 states. Decays of the D⁎ meson to the Dπ⁰ and Dγ final states are partially reconstructed without inclusion of the neutral pion or photon, resulting in distinctive shapes in the B candidate invariant mass distribution. Decays of the D meson are fully reconstructed in the K±π∓, K⁺K⁻ and π⁺π⁻ final states. The analysis uses a sample of charged B mesons produced in pp collisions collected by the LHCb experiment, corresponding to an integrated luminosity of 2.0, 1.0 and 2.0 fb⁻¹ taken at centre-of-mass energies of √s = 7, 8 and 13 TeV, respectively. The study of B± → D⁎K± and B± → D⁎π± decays using a partial reconstruction method is the first of its kind, while the measurement of B± → DK± and B± → Dπ± decays is an update of previous LHCb measurements. The B± → DK± results are the most precise to date